
refactor(provider): Add lifecycle guards for OpenAI client reusability #7434

Open
Tz-WIND wants to merge 2 commits into AstrBotDevs:master from Tz-WIND:main

Conversation


@Tz-WIND Tz-WIND commented Apr 9, 2026

Modifications

refactor(provider): strengthen OpenAI client lifecycle management

Introduces comprehensive client lifecycle guards in openai_source.py, fixing
client-reuse issues that could occur after a config reload or after terminate.

Main changes:

  • Added a _create_openai_client() method to decouple client-creation logic
  • Added an _is_underlying_client_closed() method to detect the state of the underlying connection
  • Added an _ensure_client() method to guarantee client availability
  • Call _ensure_client() in get_models/_query/_query_stream/text_chat/text_chat_stream/
    get_current_key/set_key
  • Improved terminate() with try/finally so the client reference is always cleared
  • Refactored init to create the client uniformly via _create_openai_client()

These changes ensure correct client behavior in the following scenarios:

  1. Normal creation at initialization
  2. Correct recreation after terminate
  3. No reuse of an already-closed client on config reload
  4. Automatic recreation when the underlying httpx.AsyncClient is closed

Affected modules: provider management, OpenAI adapter
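Taken together, the guard pattern described above can be sketched as follows. This is a hypothetical, simplified reconstruction rather than the actual openai_source.py code: the stand-in classes replace the real openai/httpx objects so the sketch is self-contained.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class _FakeHTTPClient:
    """Stand-in for httpx.AsyncClient so the sketch needs no dependencies."""

    def __init__(self):
        self.is_closed = False


class _FakeOpenAI:
    """Stand-in for openai.AsyncOpenAI; `_client` mirrors the SDK's private attribute."""

    def __init__(self):
        self._client = _FakeHTTPClient()

    async def close(self):
        self._client.is_closed = True


class ProviderSketch:
    def __init__(self):
        self.client = self._create_openai_client()

    def _create_openai_client(self):
        # The real method builds AsyncOpenAI / AsyncAzureOpenAI from provider_config.
        return _FakeOpenAI()

    def _is_underlying_client_closed(self) -> bool:
        try:
            return bool(self.client._client.is_closed)
        except AttributeError:
            # SDK internals changed or unavailable; assume closed so we rebuild.
            logger.warning("cannot inspect underlying client state")
            return True

    def _ensure_client(self) -> None:
        """Recreate the client if it is missing or its connection is closed."""
        if self.client is None or self._is_underlying_client_closed():
            self.client = self._create_openai_client()

    async def terminate(self) -> None:
        try:
            if self.client is not None:
                await self.client.close()
        finally:
            # Always drop the reference, even if close() raised.
            self.client = None
```

A subsequent call such as get_models() would then invoke `_ensure_client()` first and transparently get a fresh client after terminate or reload.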

  • This is NOT a breaking change.

Screenshots or Test Results

(screenshots attached)

Checklist

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Refine lifecycle management for the OpenAI provider client to ensure safe reuse across initialization, termination, and configuration reload scenarios.

Enhancements:

  • Decouple OpenAI/Azure OpenAI client construction into dedicated helpers and centralize detection of closed underlying HTTP clients to support automatic client recreation.
  • Introduce a guard method that verifies client availability before use and apply it across model listing, query, chat, and key access operations.
  • Harden termination logic to always clear the client reference even when close() fails, preventing reuse of closed clients during reloads.

@auto-assign auto-assign bot requested review from Raven95676 and anka-afk April 9, 2026 09:53
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 9, 2026

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 3 issues and left some high-level feedback:

  • The combination of _is_underlying_client_closed() and _ensure_client() runs on every request and will log a warning and recreate the client on every call if the SDK internals change (raising AttributeError); consider caching a boolean flag when detection fails so you only log once and avoid repeated costly client recreation attempts.
  • Client recreation in _ensure_client() is not synchronized, so in high-concurrency scenarios multiple coroutines could simultaneously see a closed client and race to create new instances and update self.default_params; consider adding a simple async lock around the recreation path to make this thread-safe.
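The suggested lock can be realized along the following lines, sketched under the assumption that `_ensure_client` may become a coroutine (its callers already run in an async context); the `created` counter is instrumentation for the sketch only.

```python
import asyncio


class ProviderSketch:
    """Sketch of a lock-guarded _ensure_client (hypothetical, simplified)."""

    def __init__(self):
        self._client_lock = asyncio.Lock()
        self.client = None
        self.created = 0  # instrumentation for this sketch only

    def _is_underlying_client_closed(self) -> bool:
        # Simplified: the real check inspects the underlying httpx client.
        return self.client is None

    def _create_openai_client(self):
        self.created += 1
        return object()  # placeholder for AsyncOpenAI

    async def _ensure_client(self) -> None:
        # Fast path without the lock; re-check under the lock so that only
        # one coroutine rebuilds when many observe a closed client at once.
        if self.client is None or self._is_underlying_client_closed():
            async with self._client_lock:
                if self.client is None or self._is_underlying_client_closed():
                    self.client = self._create_openai_client()
```

The double-checked pattern keeps the common case lock-free while serializing the rare recreation path, so concurrent callers cannot race to overwrite `self.client` and `self.default_params`.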
## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="501-510" />
<code_context>
+    def _is_underlying_client_closed(self) -> bool:
</code_context>
<issue_to_address>
**issue (bug_risk):** Treating AttributeError as "client is closed" may cause endless client recreation and log spam if the SDK internals change.

In `_is_underlying_client_closed`, an `AttributeError` currently logs a warning and returns `True`, causing `_ensure_client` to recreate the client on every call if `_client` or `is_closed` is ever removed/renamed. Consider instead returning `False` on `AttributeError` (treat as "unknown / assume open"), or at least gating the warning and recreation behind a one-time flag to avoid repeated reconstruction and log noise in that scenario.
</issue_to_address>
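A minimal sketch of the one-time-flag idea from this comment (the attribute name `_close_check_broken` is hypothetical, not from the PR):

```python
import logging

logger = logging.getLogger(__name__)


class ProviderSketch:
    def __init__(self, client):
        self.client = client
        self._close_check_broken = False  # hypothetical one-time flag

    def _is_underlying_client_closed(self) -> bool:
        if self._close_check_broken:
            return False  # detection unavailable; treat as "unknown, assume open"
        try:
            return bool(self.client._client.is_closed)
        except AttributeError:
            # Log once, set the flag, and stop recreating on every call.
            self._close_check_broken = True
            logger.warning("OpenAI SDK internals changed; disabling close detection")
            return False
```

With the flag set, subsequent requests skip both the warning and the recreation attempt instead of paying that cost on every call.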

### Comment 2
<location path="astrbot/core/provider/sources/openai_source.py" line_range="462-464" />
<code_context>
                 self.custom_headers[key] = str(self.custom_headers[key])

-        if "api_version" in provider_config:
+        self.client = self._create_openai_client()
+
+        self.default_params = inspect.signature(
+            self.client.chat.completions.create,
+        ).parameters.keys()
</code_context>
<issue_to_address>
**suggestion:** The `default_params` initialization logic is duplicated and could be centralized.

The logic that sets `self.default_params` from `inspect.signature(self.client.chat.completions.create)` now lives in both `__init__` and `_ensure_client`. To avoid divergence when this logic changes, extract it into a helper (e.g., `_refresh_default_params()`) and call that from both places.

Suggested implementation:

```python
        self.client = self._create_openai_client()

        self._refresh_default_params()

        model = provider_config.get("model", "unknown")
        self.set_model(model)

```

```python
    def _refresh_default_params(self) -> None:
        """
        Refresh default_params based on the current OpenAI client.

        This centralizes the logic for inspecting the chat.completions.create
        signature so it can be reused from __init__ and any place the client
        is (re)initialized.
        """
        if self.client is None:
            self.default_params = ()
            return

        self.default_params = inspect.signature(
            self.client.chat.completions.create,
        ).parameters.keys()

    def _create_http_client(self, provider_config: dict) -> httpx.AsyncClient | None:
        """创建带代理的 HTTP 客户端"""
        proxy = provider_config.get("proxy", "")

```

You should also update `_ensure_client` (or any other place that recreates `self.client`) to call `self._refresh_default_params()` instead of duplicating the `inspect.signature(self.client.chat.completions.create)` logic. Specifically:
1. After any assignment to `self.client = ...` inside `_ensure_client`, add `self._refresh_default_params()`.
2. Remove any direct `self.default_params = inspect.signature(...).parameters.keys()` code from `_ensure_client`.
</issue_to_address>

### Comment 3
<location path="astrbot/core/provider/sources/openai_source.py" line_range="525" />
<code_context>
+            )
+            return True
+
+    def _ensure_client(self) -> None:
+        """确保 client 可用。如果 client 为 None 或底层连接已关闭,则重新创建。"""
+        if self.client is None or self._is_underlying_client_closed():
</code_context>
<issue_to_address>
**issue (complexity):** Consider introducing a unified `_ensure_client_with_key` helper to manage client creation and key binding so that call sites and `get_current_key` stay simple and side‑effect free.

You can reduce complexity around client/key lifecycle by centralizing “ensure client + key” and avoiding side‑effectful `get_current_key`:

```python
def _ensure_client_with_key(self, api_key: str | None = None) -> None:
    """Ensure client exists and is bound to the given key (if provided)."""
    if api_key is not None and api_key != self.chosen_api_key:
        self.chosen_api_key = api_key

    # Reuse your existing logic for closed/None detection
    if self.client is None or self._is_underlying_client_closed():
        self.client = self._create_openai_client()
        self.default_params = inspect.signature(
            self.client.chat.completions.create,
        ).parameters.keys()
    elif api_key is not None:
        # Keep ability to rotate keys on an existing client
        self.client.api_key = api_key
```

Then the callers become simpler and consistent:

```python
# text_chat
for retry_cnt in range(max_retries):
    try:
        self._ensure_client_with_key(chosen_key)
        llm_response = await self._query(payloads, func_tool)
        break
    ...

# text_chat_stream
for retry_cnt in range(max_retries):
    try:
        self._ensure_client_with_key(chosen_key)
        async for response in self._query_stream(payloads, func_tool):
            yield response
        break
    ...

# set_key
def set_key(self, key) -> None:
    self._ensure_client_with_key(key)

# get_current_key
def get_current_key(self) -> str:
    # Avoids creating/recreating a client as a side effect
    return self.chosen_api_key
```

This keeps all current behavior (client recreation on closed/None, key rotation on retries) but:

- Removes repeated `self._ensure_client()` + `self.chosen_api_key = ...` + `self.client.api_key = ...` blocks.
- Makes the “which key is bound to which client” rule explicit in a single place.
- Prevents `get_current_key` from unexpectedly constructing or recreating a client.
</issue_to_address>



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the OpenAI client initialization and lifecycle management within the openai_source.py file. It extracts client creation into dedicated methods, introduces logic to ensure the client is active before use (_ensure_client), and enhances the terminate method for safer client shutdown. A notable point of feedback is the reliance on private attributes of the OpenAI SDK (_client.is_closed) for checking client status, which could lead to instability with future SDK updates and should be monitored for public API alternatives.
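Until the SDK exposes a public status API, one defensive option is to probe the internals with `getattr`, so a renamed attribute degrades to "assume open" instead of raising. This is a sketch of that idea, not the PR's actual code:

```python
def is_underlying_client_closed(client) -> bool:
    """Return True only when the private httpx client is demonstrably closed.

    `_client` and `is_closed` are undocumented internals of the OpenAI SDK,
    so a missing attribute is treated as "unknown, assume open".
    """
    http_client = getattr(client, "_client", None)
    if http_client is None:
        return False
    return bool(getattr(http_client, "is_closed", False))
```

Compared with a bare `client._client.is_closed` access, a future SDK rename here changes behavior to a silent no-op rather than an AttributeError on every request.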
